Signals Notes


Zero Input Response

For a linear system:

Characteristic Features

The characteristic features of an LTI system are:

Finding for an LTI system

Models for electric components

Where

Method

Where is the input signal, you must write for all the impedance components in the circuit.

  1. First, differentiate both sides of the loop equation to obtain an equation in the form:
  2. Then solve to get the zero input response (which may take a different form depending on whether the roots are repeated or complex, etc.):
  3. To find the unknown coefficients, use the initial conditions:

Solving Differential Equations

2nd Order homogeneous ODEs

For an equation in the form:

The characteristic equation is:

And we need to solve for and . The solution will be in different forms depending on and .

Use the initial conditions to find and .

2nd Order Non-Homogeneous ODEs

For an equation in the form:

The solution will be in the form where is the complementary solution, and is the particular solution. We can use the method of undetermined coefficients which uses trial functions which are different depending on the form of .

These can be combined together if needed e.g:

If it is a sum, then is the sum of each option.

 

In the case of , find and , sub into the original ODE and find and etc.

Only solve for the initial conditions after you have found the general case.

For linear order ODEs, the order conditions apply for the order.

Euler's Equation

For an equation in the form

You must use the chain rule to rewrite the equation:

This changes the equation into the form

This is now a linear ODE, so other methods can be used to solve it.

Legendre Equation

For an equation in the form:

Both this and Euler's Equation work for higher-order DEs.

Power Series Solutions

You must assume that:

You must then use Leibniz's theorem:

For each term with and :

 

Impulse Response

The zero-state response assumes that the system starts at rest; we need to know the impulse response to derive and understand this response.

The impulse response is the system's response when the input is the Dirac delta function () with all initial conditions zero at .

Any input can be broken into a sequence of narrow pulses that produce a system response. If a system is linear and time invariant then the system response to is the sum of its response to all narrow pulse components.

Given that a system is specified by the following differential equation:

And remembering that the general equation of a system is:

It can be shown that the impulse response is given by:

Where is the unit step function and is the solution to the homogeneous differential equation:

Where:

Zero State Response

The output at the time due to a shifted impulse with amplitude at time is multiplied by a shifted response at .

For an input:

If both and are known then can be easily found:

This is known as the convolution interval:

 

Convolution

Output of a system using convolution

The output of a system is:

Where is the input to the system, and is the impulse response of the system.

For a complex input , the output is:

The real and imaginary components are considered separately.

Demonstrating this graphically

A convolution of two signals can be thought of as the amount of overlap between two signals as they move across each other.

You can demonstrate this graphically by:

Note: Remember the variable of integration is and not .
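The overlap picture above can be checked numerically. A minimal sketch (the pulse and decaying exponential are hypothetical examples, not from the notes), approximating the convolution integral with `np.convolve`:

```python
import numpy as np

# Discrete approximation of y(t) = x(t) * h(t) on a grid of spacing dt.
# Hypothetical example: x(t) = u(t) - u(t - 1), h(t) = e^{-t} u(t).
dt = 0.001
t = np.arange(0, 5, dt)
x = ((t >= 0) & (t < 1)).astype(float)   # unit pulse
h = np.exp(-t)                           # causal decaying exponential

# np.convolve sums products of overlapping samples; multiplying by dt
# approximates the convolution integral.
y = np.convolve(x, h)[: len(t)] * dt

# Analytic result: y(t) = 1 - e^{-t} for 0 <= t < 1, (e - 1) e^{-t} for t >= 1.
y_exact = np.where(t < 1, 1 - np.exp(-t), (np.e - 1) * np.exp(-t))
print(np.max(np.abs(y - y_exact)))  # small discretisation error
```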

Interconnected Systems

For the parallel connection of two systems with impulse responses and the output is:

For the series connection of two systems with impulse responses and the output is:

Note that the order of connection isn't important

Knowing that and that , we can say that if the input of the system is the step function (i.e. ), then the output, which is called the step response, must be:

The total response is:

Natural and Forced Responses

The characteristic modes appear in the zero state response (because of the impact on ).

The and terms are collected together and called the natural response, whilst the remaining which is not a characteristic mode is called the forced response.

Summary of Convolution Properties

Other Properties

 

Laplace Transforms

The (bilateral) Laplace Transform for a signal is defined as:

In this course, the one sided transform is used:

The Laplace transform is a linear transform:

The Laplace transform of the dirac function is:

The Laplace transform of the unit step function is:

For this to converge, the real part of must be positive. This condition implies that the Laplace transform of a function might exist for certain values of and not all values of . The set of these values constitutes the so-called Region of Convergence (ROC) of the Laplace transform.

The Laplace transform of the function is:

For the above to be true,

The Laplace transform of the following equation can be found in a similar way to above:

This is equal to:

Important Laplace Transforms:

Where

Inverse Laplace Transform

The Inverse Laplace Transform for transfer functions in the form of can be found using partial fractions:

Then the important Laplace transforms can be used to find the inverse Laplace transforms of each partial fraction and then using fact that the Laplace transform is linear, the terms can be added together.
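As a sketch, `scipy.signal.residue` performs this partial-fraction expansion numerically; the transfer function below is a hypothetical example:

```python
from scipy.signal import residue

# Hypothetical example: H(s) = (s + 3) / (s^2 + 3s + 2) = (s + 3) / ((s + 1)(s + 2)).
# residue() returns the residues r, the poles p, and any polynomial part k,
# so H(s) = r[0]/(s - p[0]) + r[1]/(s - p[1]) + k(s).
r, p, k = residue([1, 3], [1, 3, 2])
print(r, p, k)
# Residues {2, -1} at poles {-1, -2}: H(s) = 2/(s+1) - 1/(s+2),
# giving h(t) = (2 e^{-t} - e^{-2t}) u(t) from the standard transform pairs.
```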

The inverse Laplace transform integral is rarely evaluated directly because doing so is feasible only in simple cases; partial fractions are used instead.

Laplace Transform Properties

Time properties

The time properties of the Laplace transform are:

Sometimes the Laplace transform of a signal can be found using a combination of the time differentiation and shifting properties. A complicated signal can be differentiated multiple times until it becomes Dirac spikes. Linear triangles differentiate to become square impulse waves, which differentiate to become Dirac spikes.

Frequency properties

The frequency properties of the Laplace transform are:

Scaling Property

The above shows that time compression of a signal by a factor a causes expansion of its Laplace transform in s-scale by the same factor.

Other Properties

poles of in the left hand plane

Alternative for type and

For Laplace transforms of the type or , the square in the denominator can introduce complexity.

To solve this there is a trick using the Laplace transform for sine.

Differentiating this with respect to gives:

Dividing by and using the transform of to get:

Letting for the desired result. Repeating the process by taking:

And differentiating with respect to again to obtain:

Hence:

Initial and Final values Theorems

Initial Value Theorem

So long as the Laplace transforms of and exist, and the power of the numerator of is less than the power of its denominator:

Final Value Theorem

So long as the Laplace transforms of and exist, and the poles of are all on the left plane or origin:

Laplace Transform for solving differential equations

Using the time differentiation property of the Laplace transform:

We can solve differential equations by converting them from the time domain into the frequency domain. This means we can solve simple algebraic equations in the frequency domain, and then use the reverse Laplace transform to get the solutions to the original differential equation.
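This procedure can be sketched symbolically. Assuming the hypothetical first-order example y' + 2y = 0 with y(0) = 1, the time differentiation property transforms the ODE into sY − 1 + 2Y = 0, so Y(s) = 1/(s + 2), which `sympy` can invert:

```python
import sympy as sp

# Hypothetical example: solve y' + 2y = 0, y(0) = 1 via the Laplace transform.
t, s = sp.symbols('t s', positive=True)

# Time differentiation property: L{y'} = s Y(s) - y(0).
# Transforming the ODE: s Y - 1 + 2 Y = 0  =>  Y(s) = 1 / (s + 2).
Y = 1 / (s + 2)

# Invert back to the time domain.
y = sp.inverse_laplace_transform(Y, s, t)
print(y)  # typically exp(-2*t) times the Heaviside step for t >= 0
```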

Laplace Transform and transfer function

The Laplace transform of the impulse response of a system is called the transfer function of the system:

Where . Knowing the transfer function of a system, we can fully determine the behaviour of the system.

Initial Conditions in systems

In circuits, initial conditions may not be zero since components such as capacitors and inductors may contain charge or current.

Capacitors

For a capacitor with an initial voltage , the equation holds. Taking the Laplace transform on both sides gives: . This can be rearranged to give the voltage across the charged capacitor:

Where the first term is the voltage across the capacitor with no charge, and the second term is the effect of the initial charge = voltage source.

Inductors

For an inductor with an initial current , the equation holds. Taking the Laplace transform on both sides gives: . This can be rearranged to give the voltage across the inductor:

Where the first term is the voltage across the inductor with no initial current, and the second term is the effect of the initial current = voltage source.

 

Fourier Transform

The Fourier Transform is a Laplace transform where the real part of is zero.

The Fourier transform is defined as:

For periodic signals, the Fourier Series are used instead:

Useful Functions

The unit rectangular function:

The bandwidth of

The unit triangular function:

The interpolation function:

The function is an even function.

Fourier Transform Properties

The Fourier Transform of a system's impulse response is the system's Frequency Response

Connecting Laplace and Fourier Transform

Unlike the Laplace transformation, the Fourier transform doesn't have a Region Of Convergence.

Setting in the Laplace Transform yields:

It is therefore true that if is absolutely integrable where:

When is absolutely integrable, the ROC of its Laplace transform includes the imaginary axis. Therefore both the Laplace and Fourier transforms exist and .

When is not absolutely integrable, the ROC of its Laplace transform doesn't include the imaginary axis and the Fourier transform may not exist.

System Stability

A system is BIBO stable (Bounded Input Bounded Output) if every bounded input produces a bounded output.

BIBO stability exists when:

Often is a linear combination of causal exponential functions of the form . For stability, .

The constant which makes the denominator of equal to zero is called a pole of the transfer function.

To achieve stability, the poles of the transfer function of a causal signal must lie in the left half of the -plane, i.e. all the poles have negative real parts

An LTI system is stable only if the ROC of its Laplace transform includes the imaginary axis.

Laplace vs Fourier

Laplace

Fourier

 

Frequency Response

We can sub into transfer functions and find the following results.

This is found by expressing in polar form:

We can also show that an LTI system's response to an everlasting exponential is

To find the frequency response, find and .

For an input of you sub in the value for and solve the equations.

You can then sub this into:

For input you ignore the until the end and you do:
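A minimal numerical sketch of this substitution, assuming a hypothetical first-order system H(s) = 1/(s + 1):

```python
import numpy as np

# Hypothetical first-order system H(s) = 1 / (s + 1); substitute s = j*omega.
def H(s):
    return 1 / (s + 1)

omega = 2.0                     # rad/s, example input frequency
Hjw = H(1j * omega)

amplitude = np.abs(Hjw)         # |H(j omega)|
phase = np.angle(Hjw)           # angle of H(j omega), in radians

# For an input cos(omega t), the steady-state output is
# amplitude * cos(omega t + phase).
print(amplitude, phase)         # 1/sqrt(5), -arctan(2)
```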

Frequency response of a system that causes a delay of seconds

The transfer function of an ideal delay is

Therefore:

Delaying a signal by has no effect on amplitude, but introduces a linear phase shift with gradient .

The quantity is the Group Delay.

Frequency response of an ideal differentiator

The transfer function of an ideal differentiator is . Therefore:

Frequency response of an ideal integrator

The transfer function of an ideal integrator is . Therefore:

Bode Plots

The poles are the roots of the denominator polynomial of the transfer function.

The zeros are the roots of the numerator polynomial of the transfer function.

Amplitude Response

We let and the amplitude response can be rearranged as:

Which when logged can be expressed in blocks using logs:

The corner frequency is . At this frequency, the max error that occurs between the actual and asymptotic plots is .

The amplitude response is:

The bode diagram for this term consists of two asymptotic curves approached by for very small and large values of . These curves will intersect at .

Therefore the Bode plot for this term would be a line along the axis and a linear curve intersecting the axis at (the corner frequency) with a slope of , where a decade is a tenfold increase in .

This is because: at ; at ; at . Thus the value of increases for every 10 fold increase in frequency. The asymptote therefore has a slope of .

When there are multiple terms, the Bode plot for each term is plotted as above, and then they are summed together to create the Bode plot for the full transfer function.
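As a sketch, `scipy.signal.bode` computes the exact amplitude (dB) and phase (degree) curves that the asymptotic plot approximates; H(s) = 1/(s + 1) is a hypothetical example with corner frequency 1 rad/s:

```python
from scipy import signal

# Hypothetical single-pole system H(s) = 1 / (s + 1), corner frequency 1 rad/s.
sys = signal.TransferFunction([1], [1, 1])
w, mag_db, phase_deg = signal.bode(sys, w=[0.01, 1.0, 100.0])

# At the corner frequency the asymptotic plot is 0 dB, but the actual
# amplitude is 20*log10(1/sqrt(2)) ~ -3 dB -- the max error at the corner.
print(mag_db)    # ~ [0, -3, -40]: the -20 dB/decade rolloff above the corner
print(phase_deg) # ~ [-0.6, -45, -89.4]
```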

For a second order pole it is common to express this in the form: Where:

The exact log amplitude is:

This means that the log amplitude plot will be different for different values of .

For complex conjugate poles , whilst for the poles are real and each can be dealt with as a separate first order factor.

The amplitude plot for a pair of complex conjugate zeros is a mirror image about the axis of the plot for a pair of complex conjugate poles.

Phase Response

We let and the phase response becomes:

Consider a term:

The corner frequency is once again at . Like the amplitude response, the phase response is made up of asymptotic curves, however three are needed for phase response bode plot.

This means that for and there will be horizontal lines parallel to the axis at and respectively with a linear curve at an angle of connecting and .

Now consider the term:

When :

When :

(This means that instead of a linear curve at an angle of , there will be a linear curve at an angle of .)

Similarly to the amplitude plot, the phase involves the parameter resulting in a different plot for each value of .

The asymptote for the phase of complex conjugate poles is a step function that is for and for , whilst the phase plot for complex conjugate zeros is the mirror image of that for complex conjugate poles.

The actual plot can be obtained if you add the error to the asymptotic plot.

 

Zeros, Poles, and Filters

The value of a transfer function at a complex frequency is:

The factor is a complex number represented by a vector from to in the complex plane.

can be rewritten:

If is negative, there is an additional phase .

Gain enhancement by a single pole

For a single pole at , the length of the line that connects the pole to the point is known as . In this situation:

At the value , is at its minimum value. The value is the horizontal distance from the imaginary axis.

In the case of , the pole is on the imaginary axis meaning is also equal to zero, therefore making the gain infinite.

Adding more poles will change the location of the overall pole. To enhance the gain at a frequency , a pole can be placed opposite the point .

Recall that the poles must lie on the left half of the -plane.

Gain enhancement by a pair of complex conjugate roots

In reality, a complex pole at must be accompanied by its complex conjugate .

The amplitude response at a specific value of , is found by measuring the length of the two lines that connect the poles to the imaginary axis , and .

This doesn't affect the system behaviour since doesn't change much.

Gain suppression by a pair of complex conjugate zeros

For a real system with a pair of complex conjugate zeros at , the amplitude response is:

Where and are the lengths from the zeros to the imaginary axis.

As the zero moves closer to the imaginary axis, the gain is suppressed.

Phase response due to a pair of complex conjugate poles or zeros

Angles formed by the poles ( respectively) at are equal and opposite.

As increases from , reduces in size and increases in size. Therefore the sum of the angles increases continually, approaching as .

For conjugate poles:

Simplest lowpass filters

A lowpass filter is a system with a frequency response that has a maximum gain at .

The transfer function of a lowpass filter is:

in the numerator achieves . .

Butterworth lowpass filter

An ideal lowpass filter has a constant gain of up to a desired frequency and then the gain drops to .

For an ideal filter, the gain needs to be enhanced in the frequencies . Therefore a wall of poles is needed facing the imaginary axis opposite the range .

For a maximally flat response, a semicircle of infinite poles is needed.

In reality, only poles are possible, therefore the filter will be non ideal.

Bandpass filters

An ideal bandpass filter has a constant gain of 1 placed symmetrically around a desired frequency , otherwise the gain drops to .

Therefore two semicircles of poles are needed around .

Bandstop filters

An ideal bandstop filter has amplitude response of around the desired frequency , otherwise the gain is .

For a second order notch filter with zero gain at :

Butterworth Filters

For a normalised lowpass filter:

As this gives an ideal lowpass filter response ( if , if ).

Using we can show that:

The poles of are given by

Since we can rearrange to show:

Therefore the poles of lie along the unit circle. There are poles given by:

Since only matters, only the first poles on the left side of the unit circle matter.

Using the transfer function of the Butterworth filters can be shown to be:

Butterworth filters are a family of filters with poles distributed evenly around the left side of the unit circle where .
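This pole placement can be checked with `scipy.signal.buttap`, which returns the normalised (cutoff 1 rad/s) analogue Butterworth prototype; the order 4 below is an arbitrary example:

```python
import numpy as np
from scipy import signal

# buttap returns the zeros, poles, and gain of an order-N normalised
# analogue Butterworth lowpass prototype.
z, p, k = signal.buttap(4)

print(np.abs(p))       # all 1: the poles lie on the unit circle
print(np.real(p) < 0)  # all True: the poles are in the left half-plane (stable)
```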

Butterworth Filters with Sallen-Key

For a Sallen Key filter with transfer function:

Assuming and is even:

Thus guaranteeing that has two poles at:

For an order filter, filters must be cascaded.

When is odd, the remaining real pole can be implemented with an RC circuit. The RC circuit will have a pole at .

To design a Butterworth filter for a cut off frequency of , simply replace with .

 

Signal Transmission, Group Delay, and Windowing Effects

Distortionless transmission

Distortionless transmission implies that the output is the same as the input apart from:

Therefore, distortionless transmission implies that:

Taking the Fourier transform gives:

Meaning the transfer function can be written as:

Group Delay

Group delay can be used to assess phase linearity:

For a passband of width centred at .

Within the passband and for the phase can be described as:

The phase is always an odd function, therefore:

Therefore we can rewrite the equation for a distortionless system:

For a system with an input it can be obtained:

For a system with an input it can be obtained:

The output envelope remains undistorted.

The output carrier acquires an extra phase .

In a modulation system transmission is considered distortionless if the envelope remains undistorted. This is because the signal is contained solely in the envelope.

Parseval's Theorem

Energy of a signal:

The function is the energy spectral density.

If is a real signal, then and are conjugate, therefore is even.

Windowing and its effects

Extracting a segment of a signal in time, is the same as multiplying the signal with a rectangular window.

Energy is spread out from to a width of . Energy leaks out from the mainlobe to the sidelobes.

If has two spectral components of frequencies which differ by less than (The mainlobe width) then they will be indistinguishable from the truncated signal. The result is loss of spectral resolution.

Lobes

Remedying Truncation

  1. Make the mainlobe as narrow as possible (as wide a window as possible).
  2. Avoid big discontinuities in the windowing function to reduce leakage (e.g. high-frequency sidelobes).
  3. (1.) and (2.) are incompatible, so compromises must be made. (Hamming, Hanning, Bartlett, Blackman, and Kaiser are all commonly used windows.)
| Window | Mainlobe width | Rolloff rate | Peak sidelobe level |
| --- | --- | --- | --- |
| Rectangular | | | |
| Bartlett | | | |
| Hanning | | | |
| Hamming | | | |
| Blackman | | | |
| Kaiser | | | |
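The sidelobe trade-off can be illustrated numerically. A sketch (window length and zero-padding are arbitrary choices) estimating the peak sidelobe level of the rectangular and Hamming windows from a zero-padded FFT:

```python
import numpy as np

# Estimate the peak sidelobe level (dB relative to the mainlobe peak)
# of a window by zero-padding and taking the FFT.
N = 64
pad = 8192

def peak_sidelobe_db(w):
    W = np.abs(np.fft.rfft(w, pad))
    W /= W.max()
    # Walk down the mainlobe to the first null, then take the largest
    # peak beyond it.
    i = 1
    while W[i] <= W[i - 1]:
        i += 1
    return 20 * np.log10(W[i:].max())

print(peak_sidelobe_db(np.ones(N)))     # ~ -13 dB (rectangular)
print(peak_sidelobe_db(np.hamming(N)))  # much lower (Hamming)
```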

 

Sampling and Quantization

In discrete-time systems, continuous signals are converted into numbers (sampled) and processed before being converted back into a continuous signal.

A sample is kept every units of time in a process called uniform sampling .

This ends up giving a train of Dirac deltas that are equivalent to the value of the wave at specific times.

Sampling Theorem

The minimum sampling rate needed to reconstruct the signal is the Nyquist Rate for , and the corresponding interval for is called the Nyquist Interval.

A bandpass signal whose spectrum exists over a frequency band also has a bandwidth of where is equal to the maximum frequency .

If a signal is sampled at a frequency lower than the Nyquist rate, the reconstructed spectrum contains overlap which corrupts the signal, meaning you cannot recover the original signal.

Sampling a signal above the Nyquist rate results in larger gaps between the repeated spectral copies, meaning you can easily reconstruct the signal.

To reconstruct the signal, you use a lowpass (not necessarily ideal) filter over the non-overlapping parts of the spectrum.

Sampling Theorem Mathematical Proof

The sampled version of can be expressed as:

We can express the impulse train as a Fourier Series:

Therefore:

Since :

Which is the sampled spectrum of a signal sampled at the Nyquist Rate.

Aliasing

Spectral Folding

In general, a sinusoid of frequency sampled at a frequency of samples per second, will result in a sampled sinusoid that appears as samples of a continuous-time sinusoid of frequency in the band , where:

Where is an integer.

E.g. a sine wave sampled at will get folded to .
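The folding rule can be sketched in a few lines (the 7 kHz / 10 kHz figures are hypothetical examples):

```python
import numpy as np

# Apparent (aliased) frequency of a sinusoid of frequency f sampled at fs:
# fold f down into the band [0, fs/2].
def aliased_frequency(f, fs):
    f_mod = f % fs                 # equivalent frequency in [0, fs)
    return min(f_mod, fs - f_mod)  # fold into [0, fs/2]

# Hypothetical numbers: a 7 kHz sine sampled at 10 kHz appears at 3 kHz.
print(aliased_frequency(7000, 10000))  # 3000

# The two sinusoids really are indistinguishable at the sample instants.
n = np.arange(8)
print(np.allclose(np.cos(2 * np.pi * 7000 * n / 10000),
                  np.cos(2 * np.pi * 3000 * n / 10000)))  # True
```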

Anti-Aliasing Filter

To avoid distortion after sampling, the signal may need a lowpass filter with cutoff frequency applied before sampling.

This results in a signal with no aliasing but with a loss of higher frequencies.

This is good because without the lowpass filter, not only would the higher frequencies be lost, but the overlapping spectral copies would also distort the signal at lower frequencies.

Practical Sampling

Impulse trains aren't actually very practical sampling signals. In practice a train of pulses may be used instead. (This is like an ideal lowpass filter)

For the train of pulses used with a signal such as with the Fourier series is used again:

Therefore:

Which allows the use of a lowpass filter to recover .

Ideal Signal Reconstruction

The process of reconstructing a continuous time signal from its samples is also known as interpolation.

The filter used for reconstruction must have a gain of and a bandwidth of any value between and .

A good choice is the middle value or giving a frequency response of:

In the time domain, the impulse response of this filter is given by:

For the Nyquist sampling rate, , .

For all Nyquist sampling instants , except at .

Each sample in , being an impulse, generates a pulse of height equal to the strength of the sample when applied at the input of the filter, exactly like convolution with an impulse. The addition of all these pulses results in .

The filter output is:

In the case of the Nyquist sampling rate , the filter output becomes:

This is known as the interpolation formula.
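A minimal sketch of the interpolation formula, using the bandlimited signal x(t) = sinc(t) (a hypothetical example) sampled at T = 0.5:

```python
import numpy as np

# Interpolation formula sketch:
#   x(t) = sum_n x[n] * sinc((t - n*T) / T)
# x(t) = sinc(t) is bandlimited, so sampling at T = 0.5 is above the
# Nyquist rate; the sum is truncated to a finite number of terms.
T = 0.5
n = np.arange(-200, 201)
samples = np.sinc(n * T)           # x[n] = x(n*T)

def reconstruct(t):
    return np.sum(samples * np.sinc((t - n * T) / T))

# Reconstruct between the sample instants and compare with the true value.
print(reconstruct(0.25), np.sinc(0.25))  # nearly equal
```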

Practical signal reconstruction

In practice a sampling frequency higher than the Nyquist sampling rate is used to allow for the use of non-ideal lowpass filters.

According to the Paley-Wiener criterion, any filter that has a frequency response which is zero above a certain frequency is not realisable (i.e. all practical filters).

This implies that in practice it is impossible to perfectly recreate a signal from its samples.

However, a filter with gradual cutoff characteristics is easier to realise than an ideal filter, which is impossible.

Furthermore, as sampling rate increases, the recovered signal approaches the desired signal more closely.

Scalar Quantisation

The sampling theorem allows us to represent signals by means of samples.

In order to store these samples, they need to be converted from real values into a format that fits the memory model of a computer.

The process that maps the continuous set to a discrete (countable) set is called quantization.

Quantization is irreversible meaning that approximation errors are introduced.

Usually the number of quantisation levels is a power of so that each symbol can be represented by a stream of bits.

Typically each sample is quantised independently (scalar quantisation). If the quantisation widths are equal, then the quantisation is uniform.

Where you assume that is some value above the maximum frequency.
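A minimal sketch of a uniform scalar quantiser (the 8-bit range below is a hypothetical choice), illustrating that the approximation error is bounded by half the step size:

```python
import numpy as np

# Uniform scalar quantiser: round each sample to the nearest multiple of
# the step size delta. The error is then at most delta / 2.
def quantise(x, delta):
    return np.round(x / delta) * delta

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 1000)   # hypothetical samples in [-1, 1)
delta = 2 / 2**8               # 8 bits over a range of width 2: 256 levels

xq = quantise(x, delta)
print(np.max(np.abs(x - xq)) <= delta / 2)  # True: bounded quantisation error
```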

Quantisation Error

Low-rate quantisation introduces correlation between the quantisation error and the original signal.

This correlation leads to artifacting or distortion.

 

Discrete Fourier Transforms

To perform spectral analysis on an incoming continuous time signal using computers:

For a signal bandwidth limited to it can be reconstructed from samples taken at and has signal spectrum from Hertz.

This interval is known as the spectral width.

Spectrum Sampling Theorem

The time sampling theorem has a dual, the spectral sampling theorem.

The spectral sampling theorem states that the spectrum of a signal , time-limited to a duration of seconds, can be reconstructed from the samples of taken at a rate of samples per , where .

From this, the periodic extension of with a period of can be constructed into the periodic signal .

This periodic signal can be expressed using a fourier series:

The result indicates that the coefficients of the Fourier Series for are the values of taken at integer multiples of and scaled by .

This implies that the spectrum of the periodic signal is the sampled version of the spectrum .

If the successive cycles of don't overlap, can be recovered from . This implies that can be reconstructed from its samples.

The samples are separated by the fundamental frequency or of the periodic signal .

Therefore the recovery condition is: .

Spectral Interpolation Formula

To reconstruct the spectrum from its samples, the samples must be taken at frequency intervals . If the sampling rate is then:

It has been proved that the signal interpolation formula can be used to recover a continuous signal from its sampled version and there is an equivalent spectral interpolation formula. Assuming that is time-limited to and centered at :

Proof of Spectral Interpolation Formula

It is known that:

Therefore:

Since can be rewritten and :

From :

Therefore:

Discrete Fourier Transform

Numerical computation of the Fourier Transform requires discrete samples since computers cannot work with continuous signals.

Furthermore, the Fourier transform can only be calculated at some values of .

For a time-limited signal, its spectrum will not be bandlimited. The spectrum of the sampled signal consists of repeating every with .

If the sampled signal is repeated periodically every samples, then the spectral samples are spaced at Hz.

Therefore, when a signal is sampled and periodically repeated, its spectrum is also sampled and periodically repeated.

The number of samples of the discrete signal in one period is whilst the number of samples of the discrete spectrum in one period is . It is seen that:

The number of samples in a period of time is identical to the number of samples in a period of frequency.

Aliasing and Leakage

Since isn't bandlimtied, it will experience aliasing.

Furthermore, if is not time-limited it'll need to be truncated with a window function. This leads to a leakage effect similar to that in time-signal sampling.

Formal Definition of Discrete Fourier Transform

If and are the and samples of and then:

It can be shown that and are related by the following:

These are the Direct and Inverse Discrete Fourier Transforms.
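These relationships can be checked with `np.fft.fft`, which computes the DFT defined above (the impulse sequence is a hypothetical example):

```python
import numpy as np

# The DFT of N time samples gives N frequency samples. Check np.fft.fft
# against a known pair: a single impulse has a flat spectrum.
x = np.array([1.0, 0.0, 0.0, 0.0])
X = np.fft.fft(x)
print(X)  # [1, 1, 1, 1]: impulse in time -> constant in frequency

# The inverse DFT recovers the original N samples.
print(np.allclose(np.fft.ifft(X), x))  # True
```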

The essential bandwidth is calculated by finding where the amplitude response drops to .

Note: For proof of the DFT relationships see the Appendix of Lecture 14 of Signals and Systems.

 

Z-Transform

The Z transform can be derived from the Laplace transform.

If is a discrete-time signal that's sampled every seconds:

Recalling that in the Laplace domain, and then the Laplace transform of is:

Defining :

From the Laplace time-shift property, is a time advance by seconds, where is the sampling period.

Therefore corresponds to one sampling period delay.

As a result, all sampled data can be expressed in terms of .

More formally, the unilateral z-transform of a causal sampled sequence:

It is given by:

The bilateral z-transform for any sampled sequence is:

Comparison of Laplace, Fourier, and Z-transforms

| Transform | Purpose | Suitable for |
| --- | --- | --- |
| Laplace Transform | Converts integro-differential equations to algebraic equations. | Continuous-time signal and systems analysis. Stable or unstable systems. |
| Fourier Transform | Converts finite-energy signals to a frequency-domain representation. | Continuous-time, stable systems. Convergent signals only. Best for steady state. |
| Discrete Fourier Transform | Converts discrete-time signals to a discrete frequency-domain representation. | Discrete-time signals. |
| Z-Transform | Converts difference equations into algebraic equations. | Discrete-time systems and signal analysis. |

Solving Z-Transform questions

To solve z-transform questions, use the equation for the sum of an infinite geometric series:

Where the ratio of convergence .

The values that the z-transform exists for are the Region-Of-Convergence, and are found by rearranging to .

The region of convergence is a circle of radius within the z (complex) plane.

The inverse z-transform is:

This is difficult to solve, so, similarly to the Laplace transform, partial fractions can be used to work backwards, together with the z-transform identities.
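The geometric-series result can be checked numerically (a sketch with hypothetical values a = 0.5, z = 2, inside the ROC |z| > |a|):

```python
import numpy as np

# For x[n] = a^n u[n], the z-transform is
#   X(z) = sum_n a^n z^{-n} = z / (z - a),  valid for |z| > |a|.
a, z = 0.5, 2.0
n = np.arange(0, 200)
partial_sum = np.sum((a / z) ** n)   # converges since |a/z| = 0.25 < 1

closed_form = z / (z - a)
print(partial_sum, closed_form)  # both 4/3
```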

Mapping from s-plane to z-plane

Since where the s-plane can be mapped to the z-plane.

For , and . Therefore the imaginary axis of the s-plane is mapped to the unit circle on the z-plane.

For , . Therefore, the left half of the s-plane is mapped to the inner part of the unit circle on the z-plane.

For , . Therefore the right half of the s-plane is mapped to the outer part of the unit circle on the z-plane.

Finding the inverse z-transform in the case of complex poles

A discrete-time LTI system is stable if and only if the ROC of its system function includes the unit circle, .

A causal discrete-time LTI system with a rational z-transform is stable if and only if all the poles of lie inside the unit circle, i.e. they must have a magnitude smaller than .